Search for: All records

Creators/Authors contains: "Thayer, Harrison"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Zero-shot time series forecasting is the task of forecasting future values of a time-dependent sequence without access to any historical data from the target series during model training. This setting differs from traditional time series forecasting, where models are typically trained on large volumes of historical data drawn from the same distribution as the test data. Zero-shot forecasting models are instead designed to generalize to unseen time series by leveraging knowledge learned from other, similar series during training. This work proposes two architectures for zero-shot time series forecasting: zSiFT and zSHiFT. Both use transformer models arranged in a Siamese network configuration; zSHiFT differs from zSiFT in adding a hierarchical transformer component to the Siamese network. The architectures are evaluated on vehicular traffic data from California available through the Caltrans Performance Measurement System (PeMS): the models are trained on traffic flow data collected in one region of California and evaluated by forecasting traffic in other regions, with forecast accuracy measured at time horizons from 4 to 48 hours. The zSiFT model achieves a Mean Absolute Error (MAE) that is 8.3% lower than that of a baseline LSTM-with-attention model, and 6.6% lower than zSHiFT's MAE. (A minimal, illustrative sketch of a Siamese transformer forecaster appears after this record.)
    Free, publicly-accessible full text available August 6, 2026
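The abstract above does not give implementation details for zSiFT or zSHiFT, so the following is only a minimal, hypothetical sketch (in PyTorch) of the general idea it names: one transformer encoder with shared weights (a Siamese configuration) applied to two input series, whose pooled embeddings are fused to forecast a fixed horizon. Every name and hyperparameter below (SiameseTransformerForecaster, d_model, horizon, the pooling and fusion choices) is an assumption for illustration, not the authors' architecture.

# Hypothetical sketch only; NOT the authors' zSiFT/zSHiFT implementation.
import torch
import torch.nn as nn

class SiameseTransformerForecaster(nn.Module):
    """Weight-shared (Siamese) transformer encoder over two univariate series.

    All hyperparameters here are illustrative assumptions.
    """
    def __init__(self, d_model=64, nhead=4, num_layers=2, horizon=48):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)        # scalar reading -> d_model features
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(2 * d_model, horizon)    # fuse both branches -> forecast

    def encode(self, x):
        # x: (batch, seq_len, 1); the same encoder weights serve both branches,
        # which is what makes the configuration Siamese.
        h = self.encoder(self.input_proj(x))
        return h.mean(dim=1)                           # simple pooled embedding

    def forward(self, target_history, reference_series):
        z_t = self.encode(target_history)              # branch 1: target's recent history
        z_r = self.encode(reference_series)            # branch 2: a related/reference series
        return self.head(torch.cat([z_t, z_r], dim=-1))

# Toy usage with random data shaped like one day of 5-minute traffic-flow readings
# per sensor (288 steps); shapes are illustrative, not taken from the paper.
model = SiameseTransformerForecaster(horizon=48)
target_hist = torch.randn(8, 288, 1)
reference = torch.randn(8, 288, 1)
forecast = model(target_hist, reference)
print(forecast.shape)    # torch.Size([8, 48])

In a zero-shot evaluation of the kind the abstract describes, a model like this would be trained on series from one PeMS region and then applied unchanged to sensors in other regions.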